PFL-MoE: Personalized Federated Learning Based on Mixture of Experts
Abstract
Federated learning (FL) is an emerging distributed machine learning paradigm that avoids data sharing among training nodes so as to protect data privacy. Under the coordination of the FL server, each client conducts model training using its own computing resources and private data set. The global model is created by aggregating the training results of the clients. To cope with highly non-IID data distributions, personalized federated learning (PFL) has been proposed to improve overall performance by allowing each client to learn a personalized model. However, one major drawback of a personalized model is the loss of generalization. To achieve model personalization while maintaining generalization, in this paper we propose a new approach, named PFL-MoE, which mixes the outputs of the personalized model and the global model via the MoE architecture. PFL-MoE is a generic approach that can be instantiated by integrating existing PFL algorithms. In particular, we propose the PFL-MF algorithm, an instance of PFL-MoE based on the freeze-base PFL algorithm. We further improve PFL-MF by enhancing the decision-making ability of the MoE gating network and propose the variant PFL-MFE. We demonstrate the effectiveness of PFL-MoE by training the LeNet-5 and VGG-16 models on the Fashion-MNIST and CIFAR-10 datasets with non-IID partitions.
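The mixing step described in the abstract is straightforward to picture in code. Below is a minimal PyTorch-style sketch, not the authors' implementation: the class name MoEMixer, the single-linear-layer gate, and gating on the flattened raw input are all illustrative assumptions; the paper's actual gating network may differ.

import torch
import torch.nn as nn

class MoEMixer(nn.Module):
    """Illustrative sketch: blend a personalized model's output with the
    global model's output through a learned per-example gate. Assumed
    structure, not the paper's code."""

    def __init__(self, global_model: nn.Module, personal_model: nn.Module,
                 input_dim: int):
        super().__init__()
        self.global_model = global_model      # model produced by FL aggregation
        self.personal_model = personal_model  # locally adapted model
        # Gating network: maps the flattened input to a mixing weight in (0, 1).
        self.gate = nn.Sequential(nn.Linear(input_dim, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x.flatten(1))            # shape (batch, 1): one weight per example
        out_personal = self.personal_model(x)  # personalization branch
        out_global = self.global_model(x)      # generalization branch
        return g * out_personal + (1.0 - g) * out_global

# Example wiring for Fashion-MNIST (28x28 grayscale images), names hypothetical:
#   mixer = MoEMixer(global_lenet, personal_lenet, input_dim=28 * 28)

Because the gate produces a convex combination, the mixed output is always bounded between the two branches, which is how this architecture trades personalization against generalization; the PFL-MFE variant strengthens the gate's decision-making ability, though the abstract does not detail how.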
Similar Resources
The Effect of Lexically Based Language Teaching (LBLT) on Vocabulary Learning among Iranian Pre-university Students
The aim of the present study is to examine the effect of the lexically based (word-centered) teaching method on vocabulary learning among pre-university students. To this end, two groups of pre-university students (sixty in total) studying in the 1389 (2010-11) academic year in Noorabad, Lorestan province, were selected and conventionally assigned as the experimental and control groups. Initially, in order to ensure the homogeneity of the two groups in vocabulary knowledge, ...
Combining Classifiers and Learning Mixture-of-Experts
Expert combination is a classic strategy that has been widely used in various problem-solving tasks. A team of individuals with diverse and complementary skills tackles a task jointly, such that a performance better than any single individual could achieve is obtained by integrating their strengths. Starting from the late 1980s in the handwritten character recognition literature, studies ...
Relevance Vector Machine based Mixture of Experts
The aim of this report is to detail the implementation of a sparse Bayesian Mixture of Experts (ME) [2] for solving a one-to-many regression mapping based on the relevance vector machine architecture. Our eventual goal is to evaluate the ME framework in human body and hand pose estimation from monocular view. However, this is left for future work. The application of ME is demonstrated using a t...
A novel mixture of experts model based on cooperative coevolution
Combining several suitable neural networks can enhance the generalization performance of the group when compared to a single network alone. However, it remains a largely open question how best to build a suitable combination of individuals. Jacobs and his colleagues proposed the Mixture of Experts (ME) model, in which a set of neural networks are trained together with a gate network. This tight...
A Mixture of Experts Classifier with Learning Based on Both Labelled and Unlabelled Data
We address statistical classifier design given a mixed training set consisting of a small labelled feature set and a (generally larger) set of unlabelled features. This situation arises, e.g., for medical images, where although training features may be plentiful, expensive expertise is required to extract their class labels. We propose a classifier structure and learning algorithm that make eff...
Journal
Journal title: Lecture Notes in Computer Science
Year: 2021
ISSN: 0302-9743, 1611-3349
DOI: https://doi.org/10.1007/978-3-030-85896-4_37